
    Simulations of Strong Gravitational Lensing with Substructure

    Galactic-sized gravitational lenses are simulated by combining a cosmological N-body simulation and models for the baryonic component of the galaxy. The lens caustics, critical curves, image locations and magnification ratios are calculated by ray-shooting on an adaptive grid. When the source is near a cusp in a smooth lens' caustic, the sum of the signed magnifications of the three closest images should be close to zero. It is found that in the observed cases this sum is generally too large to be consistent with the simulations, implying that there is not enough substructure in the simulations. This suggests that other factors play an important role. These may include limited numerical resolution, lensing by structure outside the halo, selection bias and the possibility that a randomly selected galaxy halo may be more irregular, for example due to recent mergers, than the isolated halo used in this study. It is also shown that, with the level of substructure computed from the N-body simulations, the image magnifications of the Einstein-cross type lenses are very weak functions of source size up to ~1 kpc. This is also true for the magnification ratios of widely separated images in the fold and cusp caustic lenses. This means that the magnification ratios for different emission regions of a lensed quasar should agree with each other, barring microlensing by stars. The source size dependence of the magnification ratio between the closest pair of images is more sensitive to substructure. Comment: 28 pages, 2 tables and 14 figures. Accepted to MNRAS.
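The cusp relation used in this abstract can be illustrated with a short sketch: for a smooth lens, the signed magnifications of the three images merging at a cusp nearly cancel, and the normalised residual R_cusp is a standard diagnostic for substructure. The magnification values below are hypothetical, chosen only to illustrate the parity convention.

```python
def r_cusp(mu_a, mu_b, mu_c):
    """Cusp-relation diagnostic: the normalised residual
    (sum of signed magnifications) / (sum of |magnifications|),
    which is close to zero for a smooth lens."""
    return (mu_a + mu_b + mu_c) / (abs(mu_a) + abs(mu_b) + abs(mu_c))

# The middle image of a cusp triplet has opposite parity, hence the
# negative signed magnification; these values are hypothetical.
print(r_cusp(12.0, -20.0, 9.0))   # 1/41 ~ 0.024, a small residual
print(r_cusp(10.0, -20.0, 10.0))  # 0.0, exact cancellation
```

A residual much larger than expected from smooth models is what the abstract refers to as the observed sums being "too large".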

    Effects of baryons on weak lensing peak statistics

    Upcoming weak-lensing surveys have the potential to become leading cosmological probes provided all systematic effects are under control. Recently, the ejection of gas due to feedback energy from active galactic nuclei (AGN) has been identified as a major source of uncertainty, challenging the success of future weak-lensing probes in terms of cosmology. In this paper we investigate the effects of baryons on the number of weak-lensing peaks in the convergence field. Our analysis is based on full-sky convergence maps constructed via light-cones from N-body simulations, and we rely on the baryonic correction model of Schneider et al. (2019) to model the baryonic effects on the density field. As a result we find that the baryonic effects strongly depend on the Gaussian smoothing applied to the convergence map. For a DES-like survey setup, a smoothing of θ_k ≳ 8 arcmin is sufficient to keep the baryon signal below the expected statistical error. Smaller smoothing scales lead to a significant suppression of high peaks (with signal-to-noise above 2), while lower peaks are not affected. The situation is more severe for a Euclid-like setup, where a smoothing of θ_k ≳ 16 arcmin is required to keep the baryonic suppression signal below the statistical error. Smaller smoothing scales require a full modelling of baryonic effects since both low and high peaks are strongly affected by baryonic feedback. Comment: 22 pages, 11 figures, accepted by JCAP.
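The smoothing-scale dependence described above can be sketched numerically: smooth the convergence map with a Gaussian kernel, then count local maxima above a signal-to-noise threshold. This is an illustrative toy, not the paper's pipeline; the smoothing scale is given in pixels rather than arcmin, and `count_peaks` and its parameter names are our own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def count_peaks(kappa, sigma_noise, smooth_pix, nu=2.0):
    """Count local maxima of a Gaussian-smoothed convergence map whose
    signal-to-noise exceeds nu. smooth_pix is the kernel width in
    pixels, standing in for the theta_k of the abstract."""
    sm = gaussian_filter(kappa, smooth_pix)
    is_max = sm == maximum_filter(sm, size=3)   # 3x3 local maxima
    return int(np.sum(is_max & (sm / sigma_noise > nu)))

# Toy map: a single point-like overdensity on an empty field gives
# exactly one significant peak after smoothing.
kappa = np.zeros((64, 64))
kappa[32, 32] = 10.0
print(count_peaks(kappa, sigma_noise=0.01, smooth_pix=4))  # 1
```

Raising `smooth_pix` dilutes the peak amplitude, which is the mechanism by which stronger smoothing suppresses the (baryon-sensitive) high-significance peaks.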

    PynPoint: a modular pipeline architecture for processing and analysis of high-contrast imaging data

    The direct detection and characterization of planetary and substellar companions at small angular separations is a rapidly advancing field. Dedicated high-contrast imaging instruments deliver unprecedented sensitivity, enabling detailed insights into the atmospheres of young low-mass companions. In addition, improvements in data reduction and PSF subtraction algorithms are equally relevant for maximizing the scientific yield, both from new and archival data sets. We aim to develop a generic and modular data reduction pipeline for processing and analysis of high-contrast imaging data obtained with pupil-stabilized observations. The package should be scalable and robust for future implementations and in particular well suited to the 3-5 micron wavelength range, where typically (ten) thousands of frames have to be processed and an accurate subtraction of the thermal background emission is critical. PynPoint is written in Python 2.7 and applies various image processing techniques, as well as statistical tools for analyzing the data, building on open-source Python packages. The current version of PynPoint has evolved from an earlier version that was developed as a PSF subtraction tool based on PCA. The architecture of PynPoint has been redesigned with the core functionalities decoupled from the pipeline modules. Modules have been implemented for dedicated processing and analysis steps, including background subtraction, frame registration, PSF subtraction, photometric and astrometric measurements, and estimation of detection limits. The pipeline package enables end-to-end data reduction of pupil-stabilized data and supports classical dithering and coronagraphic data sets.
As an example, we processed archival VLT/NACO L' and M' data of beta Pic b and reassessed the planet's brightness and position with an MCMC analysis, and we provide a derivation of the photometric error budget. Comment: 16 pages, 9 figures, accepted for publication in A&A. PynPoint is available at https://github.com/PynPoint/PynPoint
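The decoupling of core functionality from pipeline modules described above can be sketched generically. This is an illustration of the architectural idea only, not the actual PynPoint API: a pipeline object owns a central data store, and each module reads one dataset tag and writes another, so modules can be added or swapped independently.

```python
class ProcessingModule:
    """Minimal stand-in for a pipeline module: applies a function to
    the data stored under in_tag and stores the result under out_tag
    (illustrative only; names do not match the real PynPoint API)."""
    def __init__(self, name, func, in_tag, out_tag):
        self.name, self.func = name, func
        self.in_tag, self.out_tag = in_tag, out_tag

    def run(self, database):
        database[self.out_tag] = self.func(database[self.in_tag])

class Pipeline:
    """Core that owns the data store and executes modules in order,
    decoupled from what each module actually computes."""
    def __init__(self):
        self.database, self.modules = {}, []

    def add_module(self, module):
        self.modules.append(module)

    def run(self):
        for module in self.modules:
            module.run(self.database)

pipe = Pipeline()
pipe.database["raw"] = [5.0, 6.0, 7.0]
pipe.add_module(ProcessingModule(
    "background", lambda frames: [f - 5.0 for f in frames],
    in_tag="raw", out_tag="bg_sub"))
pipe.run()
print(pipe.database["bg_sub"])  # [0.0, 1.0, 2.0]
```

Because modules communicate only through tagged datasets, a background-subtraction step, a frame-registration step, or a PSF-subtraction step all plug into the same core loop.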

    Probe combination in large galaxy surveys: application of Fisher information and Shannon entropy to weak lensing

    This paper aims at developing a better understanding of the structure of the information that is contained in galaxy surveys, so as to find optimal ways to combine observables from such surveys. We first show how Jaynes' Maximum Entropy Principle allows us, in the general case, to express the Fisher information content of data sets in terms of the curvature of the Shannon entropy surface with respect to the relevant observables. This allows us to understand the Fisher information content of a data set, once a physical model is specified, independently of the specific way that the data will be processed, and without any assumptions of Gaussianity. This includes as a special case the standard Fisher matrix prescriptions for Gaussian variables widely used in the cosmological community, for instance for power spectra extraction. As an application of this approach, we evaluate the prospects of a joint analysis of weak lensing tracers up to the second order in the shape distortions, in the case that the noise in each probe can be effectively treated as model-independent. These include the magnification, and the two ellipticity and four flexion fields. At the two-point level, we show that the only effect of treating these observables in combination is a simple scale-dependent decrease in the noise contaminating the accessible spectrum of the lensing E-mode. We provide simple bounds to its extraction by a combination of such probes, as well as its quantitative evaluation when the correlations between the noise variables for any two such probes can be ignored.
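The connection between Fisher information and the curvature of the log-density can be checked numerically in the simplest case: for a Gaussian with known variance, the information on the mean is 1/σ², the expectation of the squared score. A small Monte Carlo sketch, with illustrative names of our own choosing:

```python
import numpy as np

def fisher_info_mc(mu, sigma, n=200_000, seed=1):
    """Monte Carlo estimate of the Fisher information on the mean of a
    Gaussian with known variance: E[(d/dmu log p(x|mu))^2]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, n)
    score = (x - mu) / sigma**2   # d/dmu of -(x-mu)^2 / (2 sigma^2)
    return float(np.mean(score**2))

est = fisher_info_mc(0.0, 2.0)
# analytic value: 1/sigma^2 = 0.25
```

The same expectation-of-curvature structure is what the paper generalises, via the entropy surface, beyond the Gaussian case.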

    Photo-z Performance for Precision Cosmology

    Current and future weak lensing surveys will rely on photometrically estimated redshifts of very large numbers of galaxies. In this paper, we address several different aspects of the demanding photo-z performance that will be required for future experiments, such as the proposed ESA Euclid mission. It is first shown that the proposed all-sky near-infrared photometry from Euclid, in combination with anticipated ground-based photometry (e.g. PanStarrs-2 or DES) should yield the required precision in individual photo-z of sigma(z) < 0.05(1+z) at I_AB < 24.5. Simple a priori rejection schemes based on the photometry alone can be tuned to recognise objects with wildly discrepant photo-z and to reduce the outlier fraction to < 0.25% with only modest loss of otherwise usable objects. Turning to the more challenging problem of determining the mean redshift of a set of galaxies to a precision of 0.002(1+z), we argue that, for many different reasons, this is best accomplished by relying on the photo-z themselves rather than on the direct measurement of the mean redshift from spectroscopic redshifts of a representative subset of the galaxies. A simple adaptive scheme based on the statistical properties of the photo-z likelihood functions is shown to meet this stringent systematic requirement. We also examine the effect of an imprecise correction for Galactic extinction and the effects of contamination by fainter overlapping objects in photo-z determination. The overall conclusion of this work is that the acquisition of photometrically estimated redshifts with the precision required for Euclid, or other similar experiments, will be challenging but possible. (abridged) Comment: 16 pages, 11 figures; submitted to MNRAS.
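The precision targets quoted above are conventionally phrased in terms of the normalised residual (z_phot − z_spec)/(1 + z_spec). A minimal sketch of computing the per-galaxy scatter and the outlier fraction; the 0.15 outlier cut is a common convention in the photo-z literature, not a number taken from this paper.

```python
import numpy as np

def photoz_stats(z_phot, z_spec, out_thresh=0.15):
    """Return (scatter, outlier fraction) of the normalised photo-z
    residuals d = (z_phot - z_spec) / (1 + z_spec). The 0.15 cut is a
    conventional outlier definition, assumed here for illustration."""
    z_phot = np.asarray(z_phot, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)
    d = (z_phot - z_spec) / (1.0 + z_spec)
    return float(np.std(d)), float(np.mean(np.abs(d) > out_thresh))

# Hypothetical catalogue: two good photo-z and one wildly discrepant one.
sigma, f_out = photoz_stats([0.50, 1.02, 2.10], [0.52, 1.00, 1.50])
```

The per-galaxy requirement corresponds to the first number staying below 0.05; the rejection schemes in the paper target the second.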

    Measuring dark matter substructure with galaxy-galaxy flexion statistics

    It is of great interest to measure the properties of substructures in dark matter haloes at galactic and cluster scales. Here we suggest a method to constrain substructure properties using the variance of weak gravitational flexion in a galaxy-galaxy lensing context; this is a statistical method, requiring many foreground-background pairs of galaxies. We show the effectiveness of flexion variance in measuring substructures in N-body simulations of dark matter haloes, and present the expected galaxy-galaxy lensing signals. We show the insensitivity of the method to the overall galaxy halo mass, and predict the method's signal-to-noise ratio for a space-based all-sky survey, showing that the presence of substructure down to 10^9 M⊙ haloes can be reliably detected.

    Photo-z performance for precision cosmology

    Current and future weak-lensing surveys will rely on photometrically estimated redshifts of very large numbers of galaxies. In this paper, we address several different aspects of the demanding photo-z performance that will be required for future experiments, such as the proposed ESA Euclid mission. It is first shown that the proposed all-sky near-infrared photometry from Euclid, in combination with anticipated ground-based photometry (e.g. PanStarrs-2 or DES) should yield the required precision in individual photo-z of σ(z) ≤ 0.05(1 + z) at I_AB ≤ 24.5. Simple a priori rejection schemes based on the photometry alone can be tuned to recognize objects with wildly discrepant photo-z and to reduce the outlier fraction to ≤0.25 per cent with only modest loss of otherwise usable objects. Turning to the more challenging problem of determining the mean redshift 〈z〉 of a set of galaxies to a precision of |Δ〈z〉| ≤ 0.002(1 + z), we argue that, for many different reasons, this may be best accomplished by relying on the photo-z themselves rather than on the direct measurement of 〈z〉 from spectroscopic redshifts of a representative subset of the galaxies, as has usually been envisaged. We present in Appendix A an analysis of the substantial difficulties in the latter approach that arise from the presence of large-scale structure in spectroscopic survey fields. A simple adaptive scheme based on the statistical properties of the photo-z likelihood functions is shown to meet this stringent systematic requirement, although further tests on real data will be required to verify this. We also examine the effect of an imprecise correction for Galactic extinction on the photo-z and the precision with which the Galactic extinction can be determined from the photometric data itself, for galaxies with or without spectroscopic redshifts. We also explore the effects of contamination by fainter overlapping objects in photo-z determination.
The overall conclusion of this paper is that the acquisition of photometrically estimated redshifts with the precision required for Euclid, or other similar experiments, will be challenging but possible.

    Magnetic structure of antiferromagnetic NdRhIn5

    The magnetic structure of antiferromagnetic NdRhIn5 has been determined using neutron diffraction. It has a commensurate antiferromagnetic structure with a magnetic wave vector (1/2, 0, 1/2) below T_N = 11 K. The staggered Nd moment at 1.6 K is 2.6 mu_B, aligned along the c-axis. We find the magnetic structure to be closely related to that of its cubic parent compound NdIn3 below 4.6 K. The enhanced T_N and the absence of additional transitions below T_N for NdRhIn5 are interpreted in terms of an improved matching of the crystalline-electric-field (CEF), magnetocrystalline, and exchange interaction anisotropies. In comparison, the role of these competing anisotropies on the magnetic properties of the structurally related compound CeRhIn5 is discussed. Comment: 4 pages, 4 figures.